learning graph representation
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > Canada > Quebec > Montreal (0.04)
Learning Graph Representations with Embedding Propagation
We propose EP, Embedding Propagation, an unsupervised learning framework for graph-structured data. EP learns vector representations of graphs by passing two types of messages between neighboring nodes. Forward messages consist of label representations such as representations of words and other attributes associated with the nodes. Backward messages consist of gradients that result from aggregating the label representations and applying a reconstruction loss. Node representations are finally computed from the representations of their labels. With significantly fewer parameters and hyperparameters, an instance of EP is competitive with, and often outperforms, state-of-the-art unsupervised and semi-supervised learning methods on a range of benchmark data sets.
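A minimal sketch of the forward/backward message flow the abstract describes, on a toy three-node graph. All names and data here are hypothetical, and the loss is a plain squared reconstruction distance to keep the gradients explicit — the paper's actual objective is a margin-based loss with negative sampling, which avoids the collapse a pure distance loss permits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy attributed graph: adjacency list and one label id per node.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
labels = {0: 0, 1: 1, 2: 2}

dim, lr = 4, 0.05
label_emb = rng.normal(size=(3, dim))  # one learnable embedding per label


def reconstruction_loss():
    total = 0.0
    for v, nbrs in neighbors.items():
        h_nbr = np.mean([label_emb[labels[u]] for u in nbrs], axis=0)
        total += np.sum((label_emb[labels[v]] - h_nbr) ** 2)
    return total


before = reconstruction_loss()
for _ in range(200):
    for v, nbrs in neighbors.items():
        # Forward messages: neighbors send their label representations,
        # which are aggregated (here: averaged) into a reconstruction.
        h_nbr = np.mean([label_emb[labels[u]] for u in nbrs], axis=0)
        # Backward messages: gradients of the reconstruction loss flow
        # back to the label embeddings on both sides of each edge.
        grad = 2 * (label_emb[labels[v]] - h_nbr)
        label_emb[labels[v]] -= lr * grad
        for u in nbrs:
            label_emb[labels[u]] += lr * grad / len(nbrs)
after = reconstruction_loss()

# A node's representation is computed from its label representations.
node_repr = {v: label_emb[labels[v]] for v in neighbors}
```

In this sketch each node carries a single label, so node and label representations coincide; with multiple labels per node, EP composes the node representation from all of its label embeddings.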
Beyond Grids: Learning Graph Representations for Visual Recognition
We propose learning graph representations from 2D feature maps for visual recognition. Our method draws inspiration from region-based recognition, and learns to transform a 2D image into a graph structure. The vertices of the graph define clusters of pixels (regions), and the edges measure the similarity between these clusters in a feature space. Our method further learns to propagate information across all vertices on the graph, and is able to project the learned graph representation back into 2D grids. Our graph representation facilitates reasoning beyond regular grids and can capture long-range dependencies among regions. We demonstrate that our model can be trained end-to-end, and is easily integrated into existing networks. Finally, we evaluate our method on three challenging recognition tasks: semantic segmentation, object detection, and object instance segmentation. For all tasks, our method outperforms state-of-the-art methods.
Learning Graph Representation of Agent Diffusers
Djenouri, Youcef, Belmecheri, Nassim, Michalak, Tomasz, Dubiński, Jan, Belbachir, Ahmed Nabil, Yazidi, Anis
Diffusion-based generative models have significantly advanced text-to-image synthesis, demonstrating impressive text comprehension and zero-shot generalization. These models refine images from random noise based on textual prompts, with initial reliance on text input shifting towards enhanced visual fidelity over time. This transition suggests that static model parameters might not optimally address the distinct phases of generation. We introduce LGR-AD (Learning Graph Representation of Agent Diffusers), a novel multi-agent system designed to improve adaptability in dynamic computer vision tasks. LGR-AD models the generation process as a distributed system of interacting agents, each representing an expert sub-model. These agents dynamically adapt to varying conditions and collaborate through a graph neural network that encodes their relationships and performance metrics. Our approach employs a coordination mechanism based on top-$k$ maximum spanning trees, optimizing the generation process. Each agent's decision-making is guided by a meta-model that minimizes a novel loss function, balancing accuracy and diversity. Theoretical analysis and extensive empirical evaluations show that LGR-AD outperforms traditional diffusion models across various benchmarks, highlighting its potential for scalable and flexible solutions in complex image generation tasks. Code is available at: https://github.com/YousIA/LGR_AD
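The coordination mechanism above relies on top-$k$ maximum spanning trees over the agent graph. As a hedged illustration of just that building block, a maximum spanning tree can be computed with Kruskal's algorithm by taking the heaviest edges first; the agent count and affinity weights below are invented placeholders, not the paper's performance metrics:

```python
# Kruskal's algorithm with edges sorted heaviest-first yields a
# maximum spanning tree (the mirror image of the usual minimum case).
def max_spanning_tree(n, edges):
    # edges: list of (weight, u, v) tuples over vertices 0..n-1.
    parent = list(range(n))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                             # skip cycle-forming edges
            parent[ru] = rv
            tree.append((w, u, v))
    return tree


agents = 4
# Hypothetical pairwise affinity scores between expert agents.
edges = [(5, 0, 1), (3, 1, 2), (4, 0, 2), (2, 2, 3)]
tree = max_spanning_tree(agents, edges)
```

On this toy input the tree keeps the edges of weight 5, 4, and 2, for a total weight of 11; LGR-AD would additionally rank candidate trees to pick the top-$k$.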
- Europe > Poland > Masovia Province > Warsaw (0.04)
- Europe > Norway > Eastern Norway > Oslo (0.04)
- North America > United States > Michigan > Wayne County > Detroit (0.04)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
Reviews: Learning Graph Representations with Embedding Propagation
The authors introduce embedding propagation (EP), a new message-passing method for learning representations of attributed vertices in graphs. EP computes vector representations of nodes from the 'labels' (sparse features) associated with nodes and their neighborhood. The learning of these representations is facilitated by two different types of messages sent along edges: a 'forward' message that sends the current representation of the node, and a 'backward' message that passes back the gradients of some differentiable reconstruction loss. The authors report results that are competitive with or outperform baseline representation learning methods such as DeepWalk and node2vec. Quality: The quality of the paper is high.
Reviews: Beyond Grids: Learning Graph Representations for Visual Recognition
The paper proposes to learn graph representations from visual data via a graph convolutional unit (GCU). It transforms a 2D feature map extracted from a neural network into a sample-dependent graph, where pixels with similar features form a vertex and edges measure the affinity of vertices in a feature space. Then graph convolutions are applied to pass information along the edges of the graph and update the vertex features. Finally, the updated vertex features are projected back to 2D grids based on the pixel-to-vertex assignment. GCU can be integrated into existing networks, allowing end-to-end training and capturing long-range dependencies among regions (vertices).
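The GCU pipeline the review describes — soft pixel-to-vertex assignment, aggregation into vertex features, a graph-convolution step, and projection back to the grid — can be sketched in NumPy. The dimensions, the codeword-similarity assignment, and the softmax affinities are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 4; C = 8; V = 3                 # grid size, channels, vertices

feat = rng.normal(size=(H * W, C))      # flattened 2D feature map
codewords = rng.normal(size=(V, C))     # learnable vertex "anchors"

# 1) Soft pixel-to-vertex assignment from feature similarity.
logits = feat @ codewords.T                        # (HW, V)
assign = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# 2) Aggregate pixels into vertex features (assignment-weighted mean).
vert = (assign.T @ feat) / assign.sum(axis=0)[:, None]  # (V, C)

# 3) One graph-convolution step: mix vertices by their affinity.
adj = vert @ vert.T                                # vertex similarity
adj = np.exp(adj) / np.exp(adj).sum(axis=1, keepdims=True)
vert = adj @ vert

# 4) Project updated vertex features back onto the 2D grid.
out = (assign @ vert).reshape(H, W, C)
```

Because the projection reuses the pixel-to-vertex assignment, the output has the same spatial layout as the input map, which is what lets the unit drop into an existing 2D network.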
A Dataset for Learning Graph Representations to Predict Customer Returns in Fashion Retail
McGowan, Jamie, Guest, Elizabeth, Yan, Ziyang, Zheng, Cong, Patel, Neha, Cusack, Mason, Donaldson, Charlie, de Cnudde, Sofie, Facini, Gabriel, Dzogang, Fabon
We present a novel dataset collected by ASOS (a major online fashion retailer) to address the challenge of predicting customer returns in a fashion retail ecosystem. With the release of this substantial dataset, we hope to motivate further collaboration between research communities and the fashion industry. We first explore the structure of this dataset with a focus on the application of Graph Representation Learning in order to exploit the natural data structure and provide statistical insights into particular features within the data. In addition, we show examples of a return prediction classification task with a selection of baseline models (i.e. with no intermediate representation learning step) and a graph-representation-based model. We show that in a downstream return prediction classification task, an F1-score of 0.792 can be achieved using a Graph Neural Network (GNN), improving upon the other models discussed in this work. Alongside this increased F1-score, we also observe a lower cross-entropy loss when recasting the data into a graph structure, indicating more robust predictions from a GNN-based solution. These results provide evidence that GNNs could provide more impactful and usable classifications than other baseline models on the presented dataset, and with this motivation we hope to encourage further research into graph-based approaches using the ASOS GraphReturns dataset.
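A minimal sketch of what "recasting the data into a graph structure" can look like: each purchase row becomes an edge of a customer-product bipartite graph, and the return flag becomes the edge label a GNN would be trained to predict. The rows and field names below are invented toy data, not the ASOS GraphReturns schema:

```python
# Hypothetical purchase records: (customer, product, returned?).
purchases = [
    ("cust_1", "dress_a", 1),   # returned
    ("cust_1", "shoe_b", 0),
    ("cust_2", "dress_a", 0),
]

# Bipartite adjacency: every purchase is an edge; the return flag
# travels with the edge as its label.
graph = {"customers": {}, "products": {}}
for cust, prod, returned in purchases:
    graph["customers"].setdefault(cust, []).append((prod, returned))
    graph["products"].setdefault(prod, []).append((cust, returned))
```

The payoff of the recast is visible even in this toy example: a product node such as `dress_a` now aggregates its return history across all customers, structure a row-wise baseline model never sees.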
- Europe > United Kingdom > England > Greater London > London (0.06)
- North America > United States > Washington > King County > Seattle (0.06)
- North America > United States > New York > New York County > New York City (0.04)
Learning Graph Representations with Embedding Propagation
Duran, Alberto Garcia, Niepert, Mathias
We propose EP, Embedding Propagation, an unsupervised learning framework for graph-structured data. EP learns vector representations of graphs by passing two types of messages between neighboring nodes. Forward messages consist of label representations such as representations of words and other attributes associated with the nodes. Backward messages consist of gradients that result from aggregating the label representations and applying a reconstruction loss. Node representations are finally computed from the representation of their labels.